Learning Bound for Parameter Transfer Learning

Neural Information Processing Systems

We consider a transfer learning problem using the parameter transfer approach, in which a suitable parameter of a feature mapping is learned on one task and applied to another, objective task. We then introduce the notions of local stability of a parametric feature mapping and of parameter transfer learnability, and thereby derive a learning bound for parameter transfer algorithms. As an application of parameter transfer learning, we discuss the performance of sparse coding in self-taught learning. Although self-taught learning algorithms with plentiful unlabeled data often show excellent empirical performance, they have so far lacked theoretical analysis. In this paper, we also provide the first theoretical learning bound for self-taught learning.
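The sparse-coding pipeline the abstract refers to — learn a dictionary on plentiful unlabeled source data, transfer that dictionary as the parameter of the feature mapping, and represent target-task data by its sparse codes — can be sketched as below. This is an illustrative stand-in, not the paper's exact algorithm: the ISTA solver and the alternating least-squares dictionary update are generic choices, and all sizes and hyperparameters are arbitrary.

```python
import numpy as np

def soft_threshold(z, t):
    # proximal operator of the l1 norm
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_code(D, x, lam=0.1, n_iter=100):
    """ISTA for min_z 0.5 * ||x - D z||^2 + lam * ||z||_1."""
    step = 1.0 / np.linalg.norm(D, ord=2) ** 2  # 1 / sigma_max(D)^2
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = soft_threshold(z - step * D.T @ (D @ z - x), step * lam)
    return z

def learn_dictionary(X, n_atoms, lam=0.1, n_iter=20, seed=0):
    """Alternate sparse coding and a regularized least-squares dictionary update."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((X.shape[1], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        Z = np.stack([sparse_code(D, x, lam) for x in X])      # codes, shape (n, k)
        D = X.T @ Z @ np.linalg.inv(Z.T @ Z + 1e-6 * np.eye(n_atoms))
        D /= np.linalg.norm(D, axis=0) + 1e-12                 # unit-norm atoms
    return D

# Source task: unlabeled data determines the transferred parameter (the dictionary).
rng = np.random.default_rng(1)
X_unlabeled = rng.standard_normal((200, 16))
D = learn_dictionary(X_unlabeled, n_atoms=8)

# Target task: transfer D and use sparse codes as features for a supervised learner.
X_labeled = rng.standard_normal((10, 16))
features = np.stack([sparse_code(D, x) for x in X_labeled])
print(features.shape)  # (10, 8)
```

In the paper's terminology, the learned dictionary plays the role of the transferred parameter, and the sparse-coding map is the parametric feature mapping whose local stability the bound relies on.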


Reviews: Learning Bound for Parameter Transfer Learning

Neural Information Processing Systems

The parameter transfer learning framework described in the paper is very interesting and deserves attention. The approach taken by the authors (described in Section 2) is sound but lacks clarity. The notation is well chosen, but not always properly explained (see my "Specific comments" below). Also, since the transfer learning framework is very similar to domain adaptation, which is studied in many papers (such as Ben-David et al. 2007, cited by the authors), it would be interesting to discuss the connection of Theorem 1 with existing domain adaptation results. Section 3 is difficult to follow for a reader not familiar with sparse coding (like myself).


Learning Bound for Parameter Transfer Learning

Kumagai, Wataru

Neural Information Processing Systems
